Spatial motion vector prediction method and apparatus
Patent abstract:
SPATIAL MOTION VECTOR PREDICTION METHOD AND APPARATUS. An apparatus and method for deriving a motion vector predictor, a motion vector predictor candidate, a motion vector, or a motion vector candidate of a current block are disclosed. In video encoding systems, spatial and temporal redundancy is exploited using spatial and temporal prediction to reduce the information to be transmitted or stored. Motion Vector Prediction (MVP) has been used to further reduce the bit rate associated with motion vector encoding. The MVP technique being developed for the current HEVC only considers a motion vector having the same reference list and the same reference image index as the current block to be an available spatial motion vector predictor candidate. It is desirable to develop an MVP scheme that can improve the availability of motion vector predictor candidates based on the motion vectors of spatially neighboring blocks. Thus, an apparatus and method for determining a motion vector predictor or candidate of (...).
Publication number: BR112013012832B1
Application number: R112013012832-1
Filing date: 2011-04-26
Publication date: 2021-08-31
Inventors: Jian-Liang Lin; Yu-Pao Tsai; Yu-Wen Huang; Shaw-Min Lei
Applicant: Hfi Innovation Inc.
Primary IPC class:
Patent description:
CROSS REFERENCE TO RELATED APPLICATIONS The present invention claims priority to US Provisional Patent Application No. 61/416,413, filed November 23, 2010, entitled "New Space Motion Vector Predictor", and US Provisional Patent Application No. 61/431,454, filed January 11, 2011, entitled "Enhanced Advanced Motion Vector Prediction". The US provisional patent applications are hereby incorporated by reference in their entirety. TECHNICAL FIELD The present invention relates to video encoding. In particular, the present invention relates to encoding techniques associated with motion vector prediction. BACKGROUND In video coding systems, spatial and temporal redundancy is exploited using spatial and temporal prediction to reduce the information to be transmitted. Spatial and temporal prediction use decoded pixels from the same image and from reference images, respectively, to form the prediction for the current pixels to be encoded. In a conventional encoding system, side information associated with spatial and temporal prediction may have to be transmitted, which takes up some bandwidth of the compressed video data. The transmission of motion vectors for temporal prediction may require a considerable portion of the compressed video data, especially in low bit rate applications. To further reduce the bit rate associated with motion vectors, a technique called motion vector prediction (MVP) has been used in the field of video encoding in recent years. The MVP technique exploits the statistical redundancy among spatially and temporally neighboring motion vectors. In the HEVC development, a technique called advanced motion vector prediction (AMVP) is being considered. The AMVP technique uses explicit predictor signaling to indicate the selected MVP candidate from an MVP candidate set. The MVP candidate set includes spatial MVP candidates as well as a temporal candidate, where the spatial MVP candidates include three candidates selected from three respective neighboring block groups of the current block. The MVP set proposed for AMVP also includes the median of the three spatial candidates and one temporal MVP candidate. The AMVP technique only considers an MV (motion vector) with the same reference image list and the same reference image index as the current block to be an available spatial MVP candidate. If an MV with the same reference image list and the same reference image index is not available, the AMVP technique searches for an available motion vector from the next neighboring block in the group. It is highly desirable to develop an MVP scheme that can improve the availability of neighboring-block MVP candidates. An improved MVP scheme can result in smaller motion vector residues and, hence, the coding efficiency can be improved. Furthermore, it is desirable that the MVP scheme allow the predictor to be derived at the decoder based on decoded information so that no additional side information has to be transmitted. SUMMARY An apparatus and method for deriving a motion vector predictor or motion vector predictor candidate or motion vector or motion vector candidate for a current block in an image based on motion vectors of a spatially neighboring block are disclosed.
In an embodiment according to the present invention, the apparatus and method for deriving a motion vector predictor or motion vector predictor candidate or motion vector or motion vector candidate for a current block comprise the steps of receiving a first motion vector associated with a first reference image in a first reference image list and a second motion vector associated with a second reference image in a second reference image list of a spatially neighboring block, and determining an MVP (motion vector predictor), or at least one MVP candidate, or an MV (motion vector), or at least one MV candidate associated with a selected reference image from a chosen reference image list for a current block based on the first motion vector, the second motion vector, the first reference image, the second reference image, and the selected reference image according to an order of priority. The first reference image list and the second reference image list can be list 0 and list 1, or vice versa, and the chosen reference image list can be list 0 or list 1. The order of priority is predefined in one embodiment according to the present invention, and the order of priority is determined according to an adaptive scheme in another embodiment according to the present invention. When the predefined order is used, information associated with the predefined priority order can be embedded in a sequence header, an image header, or a slice header. The adaptive scheme can be based on criteria selected from a group consisting of statistics of reconstructed motion vectors from previous blocks, the partition type of the current block, motion vector correlation, motion vector directions, and motion vector distance. The MVP or MVP candidate or the MV or MV candidate is based on a scaled version of the first motion vector and/or the second motion vector, or a combination of the scaled version and the unscaled version of the first motion vector and/or the second motion vector, in an embodiment according to the present invention. In another embodiment according to the present invention, the derivation of an MVP or at least one MVP candidate or an MV or an MV candidate is based on a first condition and a second condition, where the first condition is related to whether the first motion vector exists and whether the first reference image is the same as the selected reference image, and the second condition is related to whether the second motion vector exists and whether the second reference image is the same as the selected reference image. Also, an MVP candidate set or an MV candidate set can be derived based on the first motion vector and the second motion vector. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 illustrates motion vector scaling for DIRECT mode prediction in B-slice encoding according to the prior art. Figure 2 illustrates motion vector scaling in B-slice encoding based on a co-located motion vector of the previous B frame according to a prior art. Figure 3 illustrates the neighboring block configuration for spatial motion vector prediction based on motion vectors of neighboring blocks in advanced motion vector prediction (AMVP) being considered for the HEVC standard. Figure 4 illustrates an example of spatial MVP candidate derivation for a current block based on the reference images (refL0, refL1) and MVs (mvL0, mvL1) of neighboring block b with a predefined order.
Figures 5A-B illustrate an example of determining the list 0 spatial MVP candidate (refL0b, mvL0b) for a current block from a neighboring block b with a predefined order. Figure 6 illustrates an example of spatial MVP candidate set derivation for a current block based on the reference images (refL0, refL1) and MVs (mvL0, mvL1) of the neighboring block with a predefined order. Figures 7A-B illustrate an example of determining the list 0 spatial MVP candidate set {(ref0L0b, mv0L0b), (ref1L0b, mv1L0b)} for a current block from a neighboring block b with a predefined order. Figure 8 illustrates an example of spatial MVP candidate derivation for a current block based on the reference images (refL0, refL1) and scaled MVs (mvL0, mvL1) of neighboring block b with a predefined order. Figure 9 illustrates an example of determining the list 0 spatial MVP candidate (refL0b, mvL0b) for a current block based on the scaled and unscaled MVs of a neighboring block b with a predefined order. Figure 10 illustrates an example of spatial MVP candidate set derivation for a current block based on the reference images (refL0, refL1) and the scaled and unscaled MVs (mvL0, mvL1) of neighboring block b with a predefined order. Figure 11 illustrates an example of determining the list 0 spatial MVP candidate set {(ref0L0b, mv0L0b), (ref1L0b, mv1L0b)} for a current block based on the scaled and unscaled MVs of a neighboring block b with a predefined order. DETAILED DESCRIPTION In video coding systems, spatial and temporal redundancy is exploited using spatial and temporal prediction to reduce the bit rate to be transmitted or stored. Spatial prediction uses decoded pixels from the same image to form the prediction for the current pixels to be encoded. Spatial prediction is often applied on a block-by-block basis, such as the 16x16 or 4x4 block for the luminance signal in H.264/AVC intra coding. In video sequences, neighboring images often have great similarities, and simply using image differences can effectively reduce the transmitted information associated with static background areas. However, moving objects in the video stream can result in substantial residues and will require a higher bit rate to encode the residues. Consequently, Motion Compensated Prediction (MCP) is often used to exploit temporal correlation in video sequences. Motion Compensated Prediction can be used in a forward prediction fashion, where a current image block is predicted using a decoded image or images that precede the current image in display order. In addition to forward prediction, backward prediction can also be used to improve the performance of motion compensated prediction. Backward prediction uses a decoded image or images after the current image in display order. Since the first version of H.264/AVC was finalized in 2003, forward prediction and backward prediction have been extended to list 0 prediction and list 1 prediction, respectively, where both list 0 and list 1 can contain multiple reference images before and/or after the current image in display order. The following describes the default reference image list configuration. For list 0, reference images earlier than the current image have lower reference image indices than those after the current image. For list 1, reference images later than the current image have lower reference image indices than those before the current image.
For both list 0 and list 1, after applying the previous rules, the temporal distance is considered as follows: a reference image closer to the current image has a lower reference image index. To illustrate the list 0 and list 1 reference image configuration, the following example is provided, where the current image is image 5 and images 0, 2, 4, 6, and 8 are reference images, the numbers indicating the display order. The list 0 reference images with ascending reference image indices, starting with index equal to zero, are 4, 2, 0, 6, and 8. The list 1 reference images with ascending reference image indices, starting with index equal to zero, are 6, 8, 4, 2, and 0; an illustrative sketch of this default ordering is given after this paragraph. The first reference image having index 0 is called a co-located image, and in this example, with image 5 as the current image, image 6 is the list 1 co-located image and image 4 is the list 0 co-located image. When a block in the list 0 or list 1 co-located image has the same block location as the current block in the current image, it is called a list 0 or list 1 co-located block. The unit used for motion estimation in earlier video standards such as MPEG-1, MPEG-2 and MPEG-4 is primarily the macroblock. For H.264/AVC, the 16x16 macroblock can be segmented into 16x16, 16x8, 8x16 and 8x8 blocks for motion estimation. In addition, the 8x8 block can be segmented into 8x8, 8x4, 4x8 and 4x4 blocks for motion estimation. For the High Efficiency Video Coding (HEVC) standard under development, the unit for motion estimation/compensation is called a Prediction Unit (PU), where the PU is hierarchically partitioned from a maximum block size. The MCP type is selected for each slice in the H.264/AVC standard. A slice in which Motion Compensated Prediction is constrained to list 0 prediction is called a P-slice. For a B-slice, Motion Compensated Prediction also includes list 1 prediction in addition to list 0 prediction. In video encoding systems, the motion vectors and encoded residues are transmitted to a decoder to reconstruct the video on the decoder side. Furthermore, in a system with a flexible reference image structure, information associated with the selected reference images may also have to be transmitted. The transmission of motion vectors may require a noticeable portion of the overall bandwidth, particularly in low bit rate applications or in systems where motion vectors are associated with smaller blocks or higher motion precision. To further reduce the bit rate associated with motion vectors, a technique called motion vector prediction (MVP) has been used in the field of video encoding in recent years. In this disclosure, MVP may also refer to Motion Vector Predictor and the abbreviation is used when unambiguous. The MVP technique exploits the statistical redundancy among spatially and temporally neighboring motion vectors. When MVP is used, a predictor for the current motion vector is chosen and the motion vector residue, that is, the difference between the motion vector and the predictor, is transmitted. The MVP scheme can be applied in a closed-loop arrangement, where the predictor is derived at the decoder based on decoded information and no additional side information has to be transmitted. Alternatively, side information can be explicitly transmitted in the bit stream to inform the decoder regarding the selected motion vector predictor.
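The default reference image list ordering of the example above can be sketched in a few lines. The following Python fragment is only an illustration under the simplifying assumption that reference images are identified by their display order numbers; the helper name build_default_lists is hypothetical and is not the construction process of any standard.

# Illustrative sketch of the default list 0 / list 1 ordering described above.
def build_default_lists(current, references):
    # Images before the current image, closest first; images after, closest first.
    before = sorted([r for r in references if r < current], key=lambda r: current - r)
    after = sorted([r for r in references if r > current], key=lambda r: r - current)
    list0 = before + after   # list 0: earlier images get the lower indices
    list1 = after + before   # list 1: later images get the lower indices
    return list0, list1

# Example from the text: current image 5, reference images 0, 2, 4, 6 and 8.
list0, list1 = build_default_lists(5, [0, 2, 4, 6, 8])
print(list0)  # [4, 2, 0, 6, 8] -> index 0 is image 4, the list 0 co-located image
print(list1)  # [6, 8, 4, 2, 0] -> index 0 is image 6, the list 1 co-located image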
In the H.264/AVC standard, there is also a SKIP mode in addition to the conventional intra and inter modes for macroblocks in a P slice. The SKIP mode requires no quantized error signal, no motion vector, and no reference index parameter to be transmitted. The only information needed for a 16x16 macroblock in SKIP mode is a signal to indicate that the SKIP mode is being used, and therefore a substantial bit rate reduction is achieved. The motion vector used for SKIP macroblock reconstruction is similar to the motion vector predictor for a macroblock. A good MVP scheme can result in more zero motion vector residues and zero quantized prediction errors. Therefore, a good MVP scheme can increase the number of SKIP-coded blocks and improve coding efficiency. In the H.264/AVC standard, four different types of inter prediction are supported for B-slices, including list 0, list 1, bi-predictive, and DIRECT prediction, where list 0 and list 1 refer to prediction using reference image group 0 and group 1, respectively. For the bi-predictive mode, the prediction signal is formed by a weighted average of the list 0 and list 1 motion compensated prediction signals. The DIRECT prediction mode is inferred from previously transmitted syntax elements and can be either list 0 or list 1 prediction or bi-predictive. Therefore, there is no need to transmit motion vector information in DIRECT mode. In the case where no quantized error signal is transmitted, the DIRECT macroblock mode is referred to as the B SKIP mode and the block can be efficiently coded. Again, a good MVP scheme can result in more zero motion vector residues and smaller prediction errors. Therefore, a good MVP scheme can increase the number of DIRECT-coded blocks and improve coding efficiency. In the HEVC being developed, some motion vector prediction improvements over H.264/AVC are being considered. In this disclosure, a system and method for deriving a motion vector predictor candidate for a current block based on the motion vectors of a spatially neighboring block are disclosed. The motion vector of a current block is predicted by the motion vectors of spatially neighboring blocks associated with list 0 reference images and list 1 reference images. The motion vectors are considered as predictor candidates for the current block and the candidates are arranged in a priority order. A candidate with a higher priority will be considered as the predictor before a candidate with a lower priority. The advantage of priority-based MVP derivation is to increase the availability of spatial MVP candidates without the need for additional side information. In the H.264/AVC standard, the temporal DIRECT mode is used for B slices, where the motion vectors for a current block in a B slice are derived from the motion vector of the co-located block in the first reference image of list 1, as shown in Figure 1. Motion vector derivation for the temporal DIRECT mode is described in "Direct Mode Coding for Bipredictive Slices in the H.264 Standard", authored by Tourapis et al., in IEEE Trans. on Circuits and Systems for Video Technology, vol. 15, no. 1, pp. 119-126, Jan. 2005. The motion vector of the co-located block 120 in the first reference image of list 1 is denoted mvCol. The motion vectors for the current block 110 are denoted mvL0 and mvL1 with respect to the list 0 reference image and the list 1 reference image, respectively. The temporal distance between the current image and the list 0 reference image is denoted as TDB and the temporal distance between the list 0 reference image and the list 1 reference image is denoted as TDD.
The motion vectors for the current block can be obtained according to mvL0 = (TDB / TDD) × mvCol and mvL1 = ((TDB - TDD) / TDD) × mvCol. These equations were later replaced by a fixed-point formulation, mvL0 = (ScaleFactor × mvCol + 128) >> 8 and mvL1 = mvL0 - mvCol, where X = (16384 + abs(TDD / 2)) / TDD and ScaleFactor = clip(-1024, 1023, (TDB × X + 32) >> 6), so that X and ScaleFactor can be pre-calculated at the slice/image level. In temporal DIRECT mode, motion vector prediction is based only on the motion vector of the co-located block in the first reference image of list 1. In another prior art, entitled "RD Optimized Coding for Motion Vector Predictor Selection", by Laroche et al., in IEEE Trans. on Circuits and Systems for Video Technology, vol. 18, no. 12, pp. 1681-1691, December 2008, motion vector predictor selection based on motion vector competition is disclosed. The motion vector competition scheme uses rate-distortion (RD) optimization to determine the best motion vector predictor from the motion vector predictor candidates. For example, as shown in Figure 2, the temporal motion vector predictor candidates can include the list 0 motion vector corresponding to the co-located block in the list 1 co-located image Ref1, and the list 0 and list 1 motion vectors of the co-located block in the list 0 co-located image, B-1. The list 0 motion vector corresponding to the co-located block in the list 1 co-located image Ref1 can be calculated in the same way as defined in the H.264/AVC standard. The list 0 and list 1 motion vectors of the co-located block in the list 0 co-located image, B-1, can be used to derive motion vector predictors for the current block. If only the co-located motion vector mvcolB-1L0 in the B-1 image pointing to a forward P image exists, the motion predictors mvL03 and mvL13 can be calculated by scaling mvcolB-1L0 according to the corresponding temporal distances. The motion vector mvcolB-1L0 is represented in Figure 2 and dL0B-1 is the temporal distance between the forward P frame and frame B-1. In the case of backward prediction, the mvL04 and mvL14 predictors can be calculated accordingly from mvcolB-1L1, the co-located motion vector in image B-1 pointing to the past P frame, as illustrated in Figure 2. Depending on availability, the corresponding predictors based on the temporal motion vectors mvcolB-1L0 and mvcolB-1L1, as well as spatial motion vectors, can be used for the current block, and RD optimization is applied to choose the best motion vector predictor. The motion vector prediction scheme according to Laroche et al. requires side information to be transmitted to the decoder side to indicate the particular motion vector predictor selected. Transmitting side information associated with the selected motion vector predictor consumes some bandwidth. Whether the motion vector competition scheme is turned on or off, spatial and temporal motion vector prediction can be beneficial in reducing motion vector residues. It is desirable to develop a spatial and/or temporal motion vector prediction technique that increases the availability of any spatial and/or temporal motion vector predictor without the need for side information, regardless of whether motion vector competition is used or not. This description focuses on the development of spatial motion vector prediction techniques that enhance the availability of spatial motion vector predictors to improve the performance of a coding system, for systems with motion vector competition as well as without motion vector competition.
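For concreteness, the temporal-distance scaling of the DIRECT mode discussed above can be written as a short fragment. The following Python sketch reproduces the fixed-point form with X and ScaleFactor; it is an illustration only, assuming positive temporal distances, and the function name scale_temporal_direct is hypothetical rather than part of any standard.

def scale_temporal_direct(mv_col, tdb, tdd):
    # mv_col: motion vector (x, y) of the co-located block in the first list 1 reference
    # tdb: temporal distance between the current image and its list 0 reference image
    # tdd: temporal distance between the list 0 and list 1 reference images
    x = (16384 + abs(tdd) // 2) // tdd                   # pre-computable at slice/image level
    scale = max(-1024, min(1023, (tdb * x + 32) >> 6))   # ScaleFactor
    mv_l0 = tuple((scale * c + 128) >> 8 for c in mv_col)
    mv_l1 = tuple(f - c for f, c in zip(mv_l0, mv_col))
    return mv_l0, mv_l1

# Example: co-located MV (8, -4) with TDB = 2 and TDD = 4 gives a roughly half-scaled mvL0.
print(scale_temporal_direct((8, -4), 2, 4))   # ((4, -2), (-4, 2))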
In the HEVC development, a technique called Advanced Motion Vector Prediction (AMVP) is proposed by McCann et al. in "Samsung's Response to the Call for Proposals on Video Compression Technology", Document JCTVC-A124, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 1st Meeting: Dresden, Germany, April 15-23, 2010. The AMVP technique uses explicit predictor signaling to indicate the selected MVP candidate from the MVP candidate set. The MVP candidate set includes spatial MVP candidates as well as temporal candidates, where the spatial MVP candidates include three candidates a', b' and c' as shown in Figure 3. Candidate a' is the first available motion vector from the group of blocks (a0, a1, ..., ana) on the upper side of the current block as shown in Figure 3, where na is the number of blocks in that group. Candidate b' is the first available motion vector from the group of blocks (b0, b1, ..., bnb) on the left side of the current block as shown in Figure 3, where nb is the number of blocks in that group. Candidate c' is the first available motion vector from the group of blocks (c, d, e) at the neighboring corners of the current block as shown in Figure 3. The MVP candidate set proposed by McCann et al. is defined as {median(a', b', c'), a', b', c', temporal MVP candidate}. The temporal MVP candidate is a co-located MV. The AMVP technique being developed for HEVC only considers an MV (motion vector) with the same reference list and the same reference image index as the current block to be an available spatial MVP candidate. If an MV with the same reference list and the same reference image index is not available from the neighboring block, the AMVP technique searches for an available motion vector from the next neighboring block in the group. It is highly desirable to develop an MVP scheme that can improve the availability of the motion vector predictor or motion vector predictor candidate of the neighboring block. An improved MVP scheme can result in smaller motion vector residues and, hence, the coding efficiency can be improved. Furthermore, it is desirable that the MVP scheme allow the predictor to be derived at the decoder based on decoded information so that no additional side information has to be transmitted. Therefore, a priority-based MVP scheme is disclosed, where a spatial MVP or spatial MVP candidate can be derived from a spatially neighboring block based on different lists and different reference images of the spatially neighboring block. Figure 4 illustrates an example of spatial MVP candidate derivation for a current block 410 from a neighboring block 420 based on the reference images (refL0, refL1) and MVs (mvL0, mvL1) of the neighboring block b with a predefined order, where mvL0 and refL0 denote the list 0 MV and reference image of neighboring block b, and mvL1 and refL1 denote the list 1 MV and reference image of neighboring block b. The MVP scheme extends the MVP candidates to both list 0 and list 1 and, in addition, different reference images from list 0 and list 1 can be used in the MVP derivation. Depending on the motion vector prediction setting, the motion vector derived from the spatially neighboring block can be used as the predictor for the current block, or the derived motion vector can be one of several motion vector predictor candidates to be considered for the motion vector predictor of the current block.
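To make the "first available motion vector from a block group" step above concrete, the following Python sketch scans a group of neighboring blocks in order and returns the first MV that matches the reference list and reference index of the current block. It is only an illustration of the search described above; the data layout (a dictionary keyed by (reference list, reference index) per neighboring block) and the function name first_available_mv are assumptions for this sketch.

def first_available_mv(block_group, target_list, target_ref_idx):
    # block_group: neighboring blocks in scan order, e.g. (a0, a1, ..., ana);
    # each block is a dict mapping (reference list, reference index) -> motion vector.
    for block in block_group:
        mv = block.get((target_list, target_ref_idx))
        if mv is not None:
            return mv          # first MV using the same list and reference index
    return None                # no candidate available from this group

# Example: two upper neighbors; only the second one has a list 0 MV with index 0.
upper_group = [{(1, 0): (3, 1)}, {(0, 0): (-2, 5)}]
print(first_available_mv(upper_group, 0, 0))   # (-2, 5)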
In addition, some embodiments according to the present invention may derive more than one motion vector as motion vector predictor candidates, and the motion vector predictor candidates are collectively called a motion vector predictor candidate set. Candidate MVs are selected in a predefined order or an adaptive order to save the necessary side information. The list 0 and list 1 reference images of the MVs can be set to a predefined value (for example, reference image index = 0) or explicitly signaled. Figures 5A-B illustrate an example of determining a spatial MVP candidate according to the MVP scheme of Figure 4, where the MVP candidate for the current block is associated with a reference image in list 0 of the current block. Although the reference image in list 0 of the current block is used as an example, the current MVP scheme can also be applied to an MVP candidate for the current block associated with a reference image in list 1 of the current block. As shown in Figures 5A-B, the list 0 spatial MVP candidate (refL0b, mvL0b) for a current block is derived from a neighboring block b with a predefined order. In Figure 5A, the MVP candidate for the current block 410 based on the neighboring block b 420 is first set to (refL0, mvL0) if mvL0 exists and refL0 is the same as the list 0 reference image of the current block. If mvL0 does not exist or refL0 is not the same as the list 0 reference image of the current block, the process considers the MVP candidate (refL1, mvL1) as the second choice for the current block 410 if mvL1 exists and refL1 is the same as the list 0 reference image of the current block. If mvL1 does not exist or refL1 is not the same as the list 0 reference image of the current block, the MVP candidate (refL0b, mvL0b) for the current block is not available. The MVP candidate derivation process is described in the following pseudocode: • If mvL0 exists and refL0 is the same as the list 0 reference image of the current block, then refL0b = refL0 and mvL0b = mvL0; • Otherwise, if mvL1 exists and refL1 is the same as the list 0 reference image of the current block, then refL0b = refL1 and mvL0b = mvL1; • Otherwise, (refL0b, mvL0b) is not available. While Figures 5A-B illustrate an example of determining the list 0 spatial MVP candidate (refL0b, mvL0b) for the current block from a neighboring block b with a predefined order, a person skilled in the art can use other predefined priority orders to achieve the same or a similar goal. In addition, although the list 0 reference image of the current block is used as an example in the above MVP scheme, a list 1 reference image of the current block can also be used. The embodiment according to the present invention shown in Figures 5A-B derives a single spatial MVP candidate for the current block from a neighboring block b with a predefined order. The MVP scheme can be extended to derive a spatial MVP candidate set, which can provide better MV prediction and/or provide more options for selecting the best MVP. An embodiment according to the present invention for deriving a spatial MVP candidate set {(ref0L0b, mv0L0b), (ref1L0b, mv1L0b)} for a current block from a neighboring block b with a predefined order is shown in Figure 6. Again, mvL0 and refL0 denote the list 0 MV and reference image of neighboring block b, and mvL1 and refL1 denote the list 1 MV and reference image of neighboring block b.
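Before describing the candidate set of Figures 7A-B, the single-candidate pseudocode of Figures 5A-B above can be sketched in a few lines of Python. The sketch assumes a simple record for the neighboring block with fields mv_l0, ref_l0, mv_l1 and ref_l1 (None when absent); it illustrates the priority order only and is not a normative derivation.

from collections import namedtuple

Neighbor = namedtuple("Neighbor", "mv_l0 ref_l0 mv_l1 ref_l1")

def derive_list0_candidate(nb, cur_ref_l0):
    # First priority: the neighbor's list 0 MV pointing to the same reference image.
    if nb.mv_l0 is not None and nb.ref_l0 == cur_ref_l0:
        return (cur_ref_l0, nb.mv_l0)      # (refL0b, mvL0b) = (refL0, mvL0)
    # Second priority: the neighbor's list 1 MV pointing to the same reference image.
    if nb.mv_l1 is not None and nb.ref_l1 == cur_ref_l0:
        return (cur_ref_l0, nb.mv_l1)      # (refL0b, mvL0b) = (refL1, mvL1)
    return None                            # (refL0b, mvL0b) is not available

# Example: the neighbor has no usable list 0 MV, but its list 1 MV points to the same
# reference image as list 0 of the current block, so that MV is taken instead.
nb = Neighbor(mv_l0=None, ref_l0=None, mv_l1=(4, -1), ref_l1=2)
print(derive_list0_candidate(nb, cur_ref_l0=2))   # (2, (4, -1))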
Figures 7A-B illustrate an example of determining the list 0 spatial MVP candidate set {(ref0L0b, mv0L0b), (ref1L0b, mv1L0b)} for the current block from neighboring block b with a predefined order. As shown in Figure 7A, the associated MV (refL0, mvL0) of the spatially neighboring block b is considered first. The MVP candidate (ref0L0b, mv0L0b) for the current block 410 based on the neighboring block b 420 is set to (refL0, mvL0) if mvL0 exists and refL0 is the same as the list 0 reference image of the current block. If mvL0 does not exist or refL0 is not the same as the list 0 reference image of the current block, the MVP candidate (ref0L0b, mv0L0b) is not available. As shown in Figure 7B, the associated MV (refL1, mvL1) of the spatially neighboring block b is then considered. The MVP candidate (ref1L0b, mv1L0b) for the current block 410 based on the neighboring block b 420 is set to (refL1, mvL1) if mvL1 exists and refL1 is the same as the list 0 reference image of the current block. If mvL1 does not exist or refL1 is not the same as the list 0 reference image of the current block, the MVP candidate (ref1L0b, mv1L0b) is not available. The MVP candidate set derivation process is described in the following pseudocode: • If mvL0 exists and refL0 is the same as the list 0 reference image of the current block, then ref0L0b = refL0 and mv0L0b = mvL0; • Otherwise, (ref0L0b, mv0L0b) is not available; • If mvL1 exists and refL1 is the same as the list 0 reference image of the current block, then ref1L0b = refL1 and mv1L0b = mvL1; • Otherwise, (ref1L0b, mv1L0b) is not available. While Figures 7A-B illustrate an example of determining the list 0 spatial MVP candidate set {(ref0L0b, mv0L0b), (ref1L0b, mv1L0b)} for the current block from a neighboring block b with a predefined order, a person skilled in the art can use other predefined priority orders to achieve the same or a similar goal. Also, although the list 0 reference image of the current block is used in the above MVP scheme, the list 1 reference image of the current block can also be used. While Figure 4 illustrates an example of deriving the spatial MVP candidate for a current block from a neighboring block based on the reference images (refL0, refL1) and MVs (mvL0, mvL1) of the neighboring block with a predefined order, a scaled version of (mvL0, mvL1), 810-820, can also be used to derive the MVP candidate in addition to the unscaled (mvL0, mvL1), as shown in Figure 8. The scaling factor is derived according to the temporal distance or the difference between image order counts, which can be positive as well as negative. The motion vector scaling examples disclosed by Tourapis et al., in IEEE Trans. on Circuits and Systems for Video Technology, vol. 15, no. 1, pp. 119-126, Jan. 2005, and Laroche et al., in IEEE Trans. on Circuits and Systems for Video Technology, vol. 18, no. 12, pp. 1681-1691, December 2008, can be applied to obtain the scaled motion vectors. However, other temporal motion vector scaling can also be applied to obtain the scaled motion vector for the current MVP scheme. The list 0 and list 1 reference images of the MVs can be set to a predefined value (for example, reference image index = 0) or explicitly signaled. An example of deriving the MVP candidate using scaled MVs (mvL0, mvL1) of neighboring block b is shown in Figure 9.
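Before turning to the scaled derivation of Figure 9, the candidate set process of Figures 7A-B above can be sketched as well. It differs from the single-candidate case in that the two checks are made independently, so zero, one or two candidates may result. The sketch below reuses the same assumed neighbor record (fields mv_l0, ref_l0, mv_l1, ref_l1) and is purely illustrative.

from collections import namedtuple

Neighbor = namedtuple("Neighbor", "mv_l0 ref_l0 mv_l1 ref_l1")

def derive_list0_candidate_set(nb, cur_ref_l0):
    # Each candidate is checked on its own; None marks an unavailable candidate.
    cand0 = (cur_ref_l0, nb.mv_l0) if nb.mv_l0 is not None and nb.ref_l0 == cur_ref_l0 else None
    cand1 = (cur_ref_l0, nb.mv_l1) if nb.mv_l1 is not None and nb.ref_l1 == cur_ref_l0 else None
    return cand0, cand1     # ((ref0L0b, mv0L0b), (ref1L0b, mv1L0b))

# Example: both MVs of the neighbor point to the current block's list 0 reference image.
nb = Neighbor(mv_l0=(1, 0), ref_l0=2, mv_l1=(4, -1), ref_l1=2)
print(derive_list0_candidate_set(nb, cur_ref_l0=2))   # ((2, (1, 0)), (2, (4, -1)))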
The process of deriving the MVP candidate is similar to that shown in Figures 5A-B, except that a scaled MVP candidate is used when the reference image of the neighboring block is not the same as the list 0 reference image of the current block. The list 0 reference image index for the current block in the example of Figure 9 is 0 and the corresponding reference image is image j, that is, refL0b = j. The motion vector mvL0 is considered first. If mvL0 exists, the MVP candidate selection for the current block goes to mvL0 if refL0 is the same as the list 0 reference image of the current block. If refL0 is not the same as the list 0 reference image of the current block, the scaled mvL0 810 is used as the MVP candidate, as shown in Figure 9. If mvL0 does not exist, the MVP scheme then checks whether mvL1 exists. If mvL1 exists, the MVP candidate selection for the current block goes to mvL1 if refL1 is the same as the list 0 reference image of the current block. If refL1 is not the same as the list 0 reference image of the current block, the scaled mvL1 820 is selected as the MVP candidate, as shown in Figure 9. The MVP candidate derivation process is described in the following pseudocode: • If mvL0 exists, o If refL0 is the same as the list 0 reference image of the current block, then mvL0b = mvL0; o Otherwise, mvL0b = scaled mvL0; • Otherwise, if mvL1 exists, o If refL1 is the same as the list 0 reference image of the current block, then mvL0b = mvL1; o Otherwise, mvL0b = scaled mvL1; • Otherwise, mvL0b is not available. While Figure 6 illustrates an example of a spatial MVP candidate set for a current block from a neighboring block based on the reference images (refL0, refL1) and MVs (mvL0, mvL1) of the neighboring block with a predefined order, a scaled version of (mvL0, mvL1) can also be included in the MVP candidate set {(ref0L0b, mv0L0b), (ref1L0b, mv1L0b)} in addition to the unscaled (mvL0, mvL1), as shown in Figure 10. The scaling factor can be derived according to the temporal distance, as described in the examples mentioned above, and other scaling methods can be used as well. Similar to the example of Figure 8, the list 0 and list 1 reference images of the MVs of the spatially neighboring block b 420 can be set to a predefined value (for example, reference image index = 0) or explicitly signaled. An example MVP scheme including the scaled MVs (mvL0, mvL1) of neighboring block b 420 in the MVP candidate set is shown in Figure 11, where the list 0 reference image index used in this example is 0 and the corresponding reference image is image j, that is, ref0L0b = ref1L0b = j. The process of deriving an MVP candidate set is similar to that shown in Figures 7A-B, except that scaled MVs can also be included in the candidate set. Again, mvL0 is considered first to derive the first MVP candidate of the set. If mvL0 exists, the MVP candidate selection for the current block goes to mvL0, that is, mv0L0b = mvL0, if refL0 is the same as the list 0 reference image of the current block, as shown in Figure 11. If refL0 is not the same as the list 0 reference image of the current block, the scaled mvL0 810 is selected as the MVP candidate, that is, mv0L0b = scaled mvL0, as shown in Figure 11. The MVP scheme then checks whether mvL1 exists. If mvL1 exists, the selection of the second MVP candidate of the candidate set for the current block goes to mvL1, that is, mv1L0b = mvL1, if refL1 is the same as the list 0 reference image of the current block.
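Before continuing with the second candidate of Figure 11, note that the scaled variants above need a motion vector scaling step. The following Python sketch combines a simplified image-order-count (POC) distance scaling with the priority order of the Figure 9 pseudocode. The field names (one POC value per reference image) and the function names scale_mv and derive_scaled_list0_mvp are assumptions for illustration; this is not the normative scaling of any standard.

from collections import namedtuple

NeighborMotion = namedtuple("NeighborMotion", "mv_l0 ref_l0_poc mv_l1 ref_l1_poc")

def scale_mv(mv, cur_poc, cur_ref_poc, nb_ref_poc):
    # Simplified distance scaling: ratio of the current block's POC distance to the
    # neighboring MV's POC distance; both distances may be positive or negative.
    num = cur_poc - cur_ref_poc
    den = cur_poc - nb_ref_poc
    return tuple(int(round(c * num / den)) for c in mv)

def derive_scaled_list0_mvp(nb, cur_poc, cur_ref_poc):
    if nb.mv_l0 is not None:
        if nb.ref_l0_poc == cur_ref_poc:
            return nb.mv_l0                                              # mvL0b = mvL0
        return scale_mv(nb.mv_l0, cur_poc, cur_ref_poc, nb.ref_l0_poc)   # scaled mvL0
    if nb.mv_l1 is not None:
        if nb.ref_l1_poc == cur_ref_poc:
            return nb.mv_l1                                              # mvL0b = mvL1
        return scale_mv(nb.mv_l1, cur_poc, cur_ref_poc, nb.ref_l1_poc)   # scaled mvL1
    return None                                                          # not available

# Example: the neighbor's list 0 MV points to a farther reference image, so it is scaled.
nb = NeighborMotion(mv_l0=(8, -4), ref_l0_poc=0, mv_l1=None, ref_l1_poc=None)
print(derive_scaled_list0_mvp(nb, cur_poc=4, cur_ref_poc=2))   # (4, -2)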
If refL1 is not the same as the list 0 reference image of the current block, the scaled mvL1 820 is selected as the second MVP candidate of the candidate set, that is, mv1L0b = scaled mvL1, as shown in Figure 11. The MVP candidate set derivation process is described in the following pseudocode: • If mvL0 exists, o If refL0 is the same as the list 0 reference image of the current block, then mv0L0b = mvL0; o Otherwise, mv0L0b = scaled mvL0; • Otherwise, mv0L0b is not available; • If mvL1 exists, o If refL1 is the same as the list 0 reference image of the current block, then mv1L0b = mvL1; o Otherwise, mv1L0b = scaled mvL1; • Otherwise, mv1L0b is not available. It is obvious to those skilled in the art to modify the above arrangements so that only the scaled version of the MVs is used to derive the MVP candidate or MVP candidate set. For example, the MVP candidate derivation process based on the scaled MVs is described in the following pseudocode: • If mvL0 exists, mvL0b = scaled mvL0; • Otherwise, if mvL1 exists, mvL0b = scaled mvL1; • Otherwise, mvL0b is not available. The MVP candidate set derivation process based on the scaled MVs is described in the following pseudocode: • If mvL0 exists, mv0L0b = scaled mvL0; • Otherwise, mv0L0b is not available; • If mvL1 exists, mv1L0b = scaled mvL1; • Otherwise, mv1L0b is not available. In the above examples of motion vector predictor candidate derivation according to a predefined priority order, a respective priority order is used in each example to illustrate the process of deriving a predictor or predictor candidate based on the motion vectors of a spatially neighboring block. The particular priority order used is in no way to be interpreted as a limitation of the present invention. A person skilled in the art can choose different priority orders of motion vector predictor candidates for practicing the present invention. Furthermore, although the above examples illustrate that the priority order among motion vector prediction candidates is determined according to a predefined priority order, the priority order of the candidates can also be determined according to an adaptive scheme. The adaptive priority ordering scheme can be based on the statistics of the reconstructed motion vectors of previous blocks, the partition type of the current block, the correlation between the motion vectors, the motion vector directions, and the distance between the motion vectors. Furthermore, the adaptive scheme can also be based on a combination of two or more of the factors mentioned above. When the statistics of the reconstructed motion vectors of previous blocks are used for the adaptive scheme, the statistics can be associated with the counts of the motion vector candidates, as an example. The priority order is adapted to the counts of the motion vector candidates, where a motion vector candidate with a higher count will be assigned a higher priority for the motion vector predictor.
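The count-based adaptation mentioned above can be kept in sync between encoder and decoder, since both observe the same reconstructed motion vectors. The following Python sketch is only an illustration of that idea, with hypothetical candidate labels; it is not a normative ordering rule.

def adaptive_priority_order(candidate_labels, selection_counts):
    # Candidates that were selected more often in previously reconstructed blocks are
    # tried first; ties keep the predefined order because Python's sort is stable.
    return sorted(candidate_labels,
                  key=lambda label: selection_counts.get(label, 0),
                  reverse=True)

# Example with hypothetical labels for the four candidate types discussed above.
labels = ["mvL0_same_ref", "mvL1_same_ref", "scaled_mvL0", "scaled_mvL1"]
counts = {"mvL1_same_ref": 12, "mvL0_same_ref": 9, "scaled_mvL0": 3}
print(adaptive_priority_order(labels, counts))
# ['mvL1_same_ref', 'mvL0_same_ref', 'scaled_mvL0', 'scaled_mvL1']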
When the partition type of the current block is used for the adaptive scheme, for example, if a current coding unit of size 2Nx2N is split into two rectangular prediction units of size Nx2N and the current block is the left prediction unit, the motion vector with the greatest similarity to the left neighbor of the current coding unit will be assigned a higher priority. If a current coding unit of size 2Nx2N is split into two rectangular prediction units of size Nx2N and the current block is the right prediction unit, the motion vector with the greatest similarity to the right neighbor of the current coding unit will be assigned a higher priority. When the correlation between motion vectors is used for the adaptive scheme, the motion vector with the highest correlation will be assigned a higher priority. For example, if two motion vectors in the priority list are exactly the same, that motion vector is considered to have the highest correlation. When the motion vector direction is used for the adaptive scheme, the motion vector that points toward the target reference image, as an example, will be assigned a higher priority. When the distance between motion vectors is used for the adaptive scheme, a motion vector with a shorter temporal distance from the current block to the target reference image, as an example, will be assigned a higher priority. Although the above examples illustrate the derivation of the motion vector predictor or motion vector predictor candidate of a current block with respect to a reference image in list 0 of the current block, a similar technique can be applied to derive the motion vector predictor or motion vector predictor candidate for the current block with respect to a reference image in list 1 of the current block. Furthermore, while each exemplary MVP or MVP candidate derivation illustrated above adopts particular combinations of the current block reference image list, the spatially neighboring block reference image list, and the unscaled and scaled motion vectors of the spatially neighboring block, other combinations can also be used for MVP candidate derivation. Note that the present invention can be applied not only to INTER mode, but also to SKIP, DIRECT, and MERGE modes. In INTER mode, given a current list, a motion vector predictor is used to predict the motion vector of a PU, and a motion vector residue is transmitted. The present invention can be applied to determine the motion vector predictor when the motion vector competition scheme is not used, or to determine the motion vector predictor candidate when the motion vector competition scheme is used. As for SKIP, DIRECT, and MERGE, they can be considered as special cases of the INTER mode, where the motion vector residue is not transmitted and is always inferred as zero. In such cases, the present invention can be applied to determine the motion vector when the motion vector competition scheme is not used, or to determine the motion vector candidate when the motion vector competition scheme is used. The motion vector prediction scheme according to the present invention as described above can be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be an integrated circuit in a video compression chip or program codes embedded in video compression software to perform the processing described herein. An embodiment of the present invention may also be program codes to be executed on a digital signal processor (DSP) to perform the processing described herein.
The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform specific tasks in accordance with the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code can be developed in different programming languages and different formats or styles. The software code can also be compiled for different target platforms. However, different code formats, styles and languages of software code, and other means of configuring the code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention. The invention can be embodied in other specific forms without departing from its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalence of the claims are to be embraced within their scope.
Claims:
Claims (7) [0001] 1. Method of deriving a motion vector predictor or motion vector predictor candidate or motion vector or motion vector candidate for a current block in an image based on motion vectors of a spatially neighboring block, the method comprising: receiving a first motion vector associated with a first reference image in a first reference image list and a second motion vector associated with a second reference image in a second reference image list of the spatially neighboring block, and selecting one of the first motion vector and the second motion vector as an MVP (motion vector predictor) or an MVP candidate or an MV (motion vector) or an MV candidate associated with a selected reference image in a selected reference image list for the current block according to a priority order, characterized by the fact that the priority order is defined by first checking whether the first motion vector exists and whether the first reference image is the same as the selected reference image, and, if this is not fulfilled, checking whether the second motion vector exists and whether the second reference image is the same as the selected reference image. [0002] 2. Method according to claim 1, characterized by the fact that the priority order is a predefined priority order. [0003] 3. Method according to claim 2, characterized by the fact that the information associated with the predefined priority order is incorporated in a sequence header, an image header, or a slice header. [0004] 4. Method according to claim 1, characterized by the fact that said MVP candidate or said MV candidate is part of an MVP candidate set or an MV candidate set having a first MVP candidate or a first MV candidate associated with the first reference image and a second MVP candidate or a second MV candidate associated with the second reference image. [0005] 5. Apparatus for deriving a motion vector predictor or motion vector predictor candidate or motion vector or motion vector candidate for a current block in an image based on motion vectors of a spatially neighboring block, the apparatus comprising: means for receiving a first motion vector associated with a first reference image in a first reference image list and a second motion vector associated with a second reference image in a second reference image list of the spatially neighboring block, and means for selecting one of the first motion vector and the second motion vector as an MVP (motion vector predictor) or an MVP candidate or an MV (motion vector) or an MV candidate associated with a selected reference image in a selected reference image list for the current block according to a priority order, characterized by the fact that the priority order is defined by first checking whether the first motion vector exists and whether the first reference image is the same as the selected reference image, and, if this is not fulfilled, checking whether the second motion vector exists and whether the second reference image is the same as the selected reference image. [0006] 6. Apparatus according to claim 5, characterized by the fact that the priority order is a predefined priority order. [0007] 7. Apparatus according to claim 6, characterized by the fact that the information associated with the predefined priority order is incorporated in a sequence header, an image header, or a slice header.
Patent family:
Publication number | Publication date
BR112013012832A2 | 2016-08-23
WO2012068826A1 | 2012-05-31
EP2643970B1 | 2018-04-11
EP2643970A1 | 2013-10-02
RU2550526C2 | 2015-05-10
CN103202014B | 2016-08-17
CN103202014A | 2013-07-10
KR20130095295A | 2013-08-27
KR101548063B1 | 2015-08-27
EP2643970A4 | 2014-08-27
RU2013127788A | 2014-12-27
US20120128060A1 | 2012-05-24
US8824558B2 | 2014-09-02
Legal status:
2017-07-11 | B25A | Requested transfer of rights approved | Owner name: HFI INNOVATION INC. (CN)
2018-03-27 | B15K | Others concerning applications: alteration of classification | IPC: H04N 19/573 (2014.01), H04N 19/577 (2014.01)
2018-12-26 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
2020-03-31 | B06U | Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]
2021-06-08 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
2021-07-13 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
2021-08-31 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette] | Free format text: term of validity of 20 (twenty) years counted from 26/04/2011, subject to the legal conditions; patent granted in accordance with ADI 5.529/DF, which determines the alteration of the grant term.
Priority:
Application number | Filing date
US 61/416,413 | 2010-11-23
US 61/431,454 | 2011-01-11
US 13/047,600 | 2011-03-14
PCT/CN2011/073329 | 2011-04-26